
    Data Privacy Beyond Differential Privacy

    Computing technologies today have made it much easier to gather personal data, ranging from GPS locations to medical records, from online behavior to social exchanges. As algorithms constantly analyze such detailed personal information for a wide range of computations, data privacy emerges as a paramount concern. As a strong, meaningful, and rigorous notion of privacy, differential privacy has provided a powerful framework for designing data analysis algorithms with provable privacy guarantees. Over the past decade, there has been tremendous progress in the theory and algorithms for differential privacy, most of which considers the setting of centralized computation, where a single, static database is subject to many data analyses. However, this standard framework does not capture many complex issues in modern computation. For example, the data might be distributed across self-interested agents, who may have an incentive to misreport it, and different individuals in the computation may have different expectations of privacy. The goal of this dissertation is to bring the rich theory of differential privacy to several computational problems in practice. We start by studying the problem of private counting query release for high-dimensional data, for which there are well-known computational hardness results. Despite this worst-case intractability barrier, we provide a solution with practical empirical performance by leveraging powerful optimization heuristics. We then tackle problems in different social and economic settings where the standard notion of differential privacy is not applicable; to that end, we use the perspective of differential privacy to design algorithms with meaningful privacy guarantees. (1) We provide privacy-preserving algorithms for solving a family of economic optimization problems under a strong relaxation of the standard definition of differential privacy: joint differential privacy. (2) We also show that (joint) differential privacy can serve as a novel tool for mechanism design when solving these optimization problems: under our private mechanisms, agents are incentivized to behave truthfully. (3) Finally, we consider the problem of using social network metadata to guide a search for some class of targeted individuals (for whom we cannot provide any meaningful privacy guarantee). We give a new variant of differential privacy, protected differential privacy, which guarantees differential privacy only for a subgroup of protected individuals. Under this privacy notion, we provide a family of algorithms for searching for targeted individuals in the network while ensuring privacy for the protected (un-targeted) ones.
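    For reference, the baseline notion that the dissertation relaxes and adapts is epsilon-differential privacy; the formulations below are textbook definitions (not taken verbatim from the dissertation) and also record joint differential privacy, the relaxation used in parts (1) and (2). A randomized mechanism M is epsilon-differentially private if, for all neighboring databases D, D' differing in a single individual's record and all events S,

        \[
        \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
        \]

    Joint differential privacy weakens this for mechanisms M : \mathcal{D}^n \to \mathcal{R}^n that deliver a separate output to each agent: changing agent i's own data may freely affect agent i's own output, but must leave the joint distribution of all other agents' outputs almost unchanged,

        \[
        \Pr[M(D)_{-i} \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D'_i, D_{-i})_{-i} \in S]
        \quad \text{for all } i,\ \text{all } D'_i,\ \text{and all } S \subseteq \mathcal{R}^{n-1}.
        \]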

    Incentivizing Exploration with Selective Data Disclosure

    We study the design of rating systems that incentivize (more) efficient social learning among self-interested agents. Agents arrive sequentially and are presented with a set of possible actions, each of which yields a positive reward with an unknown probability. A disclosure policy sends messages about the rewards of previously chosen actions to arriving agents. These messages can alter agents' incentives towards exploration: taking potentially sub-optimal actions for the sake of learning more about their rewards. Prior work has made much progress with disclosure policies that merely recommend an action to each user, but relies heavily on standard, yet very strong, rationality assumptions. We study a particular class of disclosure policies that use messages, called unbiased subhistories, consisting of the actions and rewards from a subsequence of past agents. Each subsequence is chosen ahead of time, according to a predetermined partial order on the rounds. We posit a flexible model of frequentist agent response, which we argue is plausible for this class of "order-based" disclosure policies. We measure the success of a policy by its regret, i.e., the difference, over all rounds, between the expected reward of the best action and the reward induced by the policy. A disclosure policy that reveals the full history in each round risks inducing herding behavior among the agents, and typically has regret linear in the time horizon $T$. Our main result is an order-based disclosure policy that obtains regret $\tilde{O}(\sqrt{T})$. This regret is known to be optimal in the worst case over reward distributions, even absent incentives. We also exhibit simpler order-based policies with higher, but still sublinear, regret. These policies can be interpreted as dividing a sublinear number of agents into constant-sized focus groups, whose histories are then revealed to future agents.
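    As a concrete illustration of the simpler policies just described, here is a minimal simulation sketch, assuming Bernoulli rewards and the frequentist response in its simplest form (each later agent picks the action with the best empirical mean in her message). The group count, group size, and the round-robin exploration inside the focus groups are illustrative stand-ins, not the paper's exact construction.

        import random

        def simulate(mu, T, n_groups=20, group_size=5, seed=0):
            """mu: true success probability of each action; T: number of agents.

            The first n_groups * group_size agents (kept small relative to T)
            form the focus groups; every later agent sees only the pooled
            history of those groups."""
            rng = random.Random(seed)
            K = len(mu)
            n_focus = n_groups * group_size

            def pull(a):
                # Bernoulli reward with mean mu[a]
                return 1 if rng.random() < mu[a] else 0

            total = 0
            sums, counts = [0.0] * K, [0] * K
            # Focus-group phase: try all actions round-robin (a stand-in for
            # the message-driven exploration of the actual model).
            for t in range(n_focus):
                a = t % K
                r = pull(a)
                total += r
                sums[a] += r
                counts[a] += 1

            # Every later agent receives the same unbiased subhistory (the
            # focus groups' pooled outcomes) and best-responds to it.
            best_empirical = max(range(K), key=lambda a: sums[a] / counts[a])
            for t in range(n_focus, T):
                total += pull(best_empirical)

            return max(mu) * T - total  # realized-regret proxy

        print(simulate([0.3, 0.5, 0.7], T=5000))

    Because n_focus is fixed in advance, the revealed subsequence does not depend on realized rewards, which is what makes the subhistory unbiased; the residual regret comes from the small chance that the focus groups misidentify the best action.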

    Approximately Stable, School Optimal, and Student-Truthful Many-to-One Matchings (via Differential Privacy)

    We present a mechanism for computing asymptotically stable, school-optimal matchings, while guaranteeing that it is an asymptotic dominant strategy for every student to report their true preferences to the mechanism. Our main tool in this endeavor is differential privacy: we give an algorithm that coordinates a stable matching using differentially private signals, which lead to our truthfulness guarantee. This is the first setting in which it is known how to achieve nontrivial truthfulness guarantees for students when computing school-optimal matchings, assuming worst-case preferences (for schools and students) in large markets.
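    For background, the school-optimal benchmark that the private mechanism approximates is the outcome of school-proposing deferred acceptance. Below is a minimal non-private sketch of that textbook algorithm; this is the standard baseline, not the paper's differentially private mechanism, and all identifiers are illustrative.

        from collections import deque

        def school_proposing_da(school_prefs, student_prefs, capacity):
            """school_prefs[s]: students in school s's preference order;
            student_prefs[i]: schools in student i's preference order;
            capacity[s]: number of seats at school s.
            Returns a dict mapping each matched student to a school."""
            rank = {i: {s: k for k, s in enumerate(prefs)}
                    for i, prefs in student_prefs.items()}
            next_idx = {s: 0 for s in school_prefs}  # next student to propose to
            seats = dict(capacity)
            held = {}                                # student -> school holding her
            active = deque(school_prefs)             # schools with proposals to make

            while active:
                s = active.popleft()
                while seats[s] > 0 and next_idx[s] < len(school_prefs[s]):
                    i = school_prefs[s][next_idx[s]]
                    next_idx[s] += 1
                    if s not in rank[i]:
                        continue                     # student i finds s unacceptable
                    cur = held.get(i)
                    if cur is None:
                        held[i] = s                  # i holds her first offer
                        seats[s] -= 1
                    elif rank[i][s] < rank[i][cur]:
                        held[i] = s                  # i trades up; cur regains a seat
                        seats[s] -= 1
                        seats[cur] += 1
                        if cur not in active:
                            active.append(cur)       # cur resumes proposing

            return held

        schools = {"A": ["x", "y", "z"], "B": ["y", "x", "z"]}
        students = {"x": ["B", "A"], "y": ["A", "B"], "z": ["A", "B"]}
        print(school_proposing_da(schools, students, {"A": 1, "B": 1}))
        # -> {'x': 'A', 'y': 'B'}

    Per the abstract, the paper's mechanism coordinates a process of this kind through differentially private signals rather than exact proposals, which is the source of both its approximate stability and its truthfulness guarantee for students.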